
    Streaming Hardness of Unique Games

    We study the problem of approximating the value of a Unique Games instance in the streaming model. A simple count of the number of constraints divided by $p$, the alphabet size of the Unique Game, gives a trivial $p$-approximation that can be computed in $O(\log n)$ space. Meanwhile, with high probability, a sample of $\widetilde{O}(n)$ constraints suffices to estimate the optimal value to $(1+\epsilon)$ accuracy. We prove that any single-pass streaming algorithm that achieves a $(p-\epsilon)$-approximation requires $\Omega_\epsilon(\sqrt{n})$ space. Our proof is via a reduction from lower bounds for a communication problem that is a $p$-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reductions to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for the streaming approximability of other CSP-like problems.
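The trivial $p$-approximation mentioned above can be sketched in a few lines: a random assignment satisfies each Unique Games constraint with probability $1/p$, so the optimum lies between $m/p$ and $m$, and a single-pass counter reporting $m/p$ is a $p$-approximation. A minimal illustration (the constraint encoding here is hypothetical, not from the paper):

```python
import random

def trivial_streaming_estimate(constraint_stream, p):
    """One-pass counter using O(log n) space: return m / p.

    A uniformly random assignment satisfies each Unique Games
    constraint with probability 1/p, so the true optimum lies in
    [m/p, m] and m/p is a p-approximation of the optimal value.
    """
    m = 0
    for _ in constraint_stream:  # only the count is retained
        m += 1
    return m / p

# Toy stream of (u, v, permutation) constraints over alphabet size p = 3.
p = 3
stream = ((u, v, random.sample(range(p), p)) for u, v in [(0, 1), (1, 2), (0, 2)])
print(trivial_streaming_estimate(stream, p))  # 3 constraints -> 1.0
```

The point of the lower bound is that beating this trivial counter by any constant factor below $p$ forces $\Omega_\epsilon(\sqrt{n})$ space in a single pass.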

    A Convergence Theory for Federated Average: Beyond Smoothness

    Federated learning enables a large number of edge computing devices to jointly learn a model without sharing data. As a leading algorithm in this setting, Federated Averaging (FedAvg), which runs Stochastic Gradient Descent (SGD) in parallel on local devices and averages the resulting models only once in a while, has been widely used due to its simplicity and low communication cost. However, despite recent research efforts, it lacks theoretical analysis under assumptions beyond smoothness. In this paper, we analyze the convergence of FedAvg. Different from existing work, we relax the assumption of strong smoothness. More specifically, we assume semi-smoothness and semi-Lipschitz properties for the loss function, which carry an additional first-order term in their definitions. In addition, we assume a bound on the gradient that is weaker than the bounded-gradient assumption commonly used in convergence analyses. Putting these together, this paper provides a theoretical convergence study of Federated Learning beyond smoothness. Comment: BigData 202
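The FedAvg scheme the abstract describes, local SGD on each device with periodic server-side averaging, can be sketched as follows. This is a generic illustration on least-squares losses, not the paper's analyzed setting; the function name, step counts, and learning rate are all illustrative choices:

```python
import numpy as np

def fedavg(local_data, w0, rounds=50, local_steps=5, lr=0.1):
    """Minimal FedAvg sketch (hypothetical least-squares setup).

    Each round, every device starts from the shared model, runs
    `local_steps` of SGD on its own data, and the server then
    averages the local models. Communication happens only once
    per round, which is the source of FedAvg's low cost.
    """
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for X, y in local_data:                  # one (X, y) per device
            w_k = w.copy()
            for _ in range(local_steps):
                i = np.random.randint(len(y))    # sample one data point
                grad = (X[i] @ w_k - y[i]) * X[i]
                w_k -= lr * grad
            local_models.append(w_k)
        w = np.mean(local_models, axis=0)        # server averaging step
    return w
```

With smooth losses this converges under standard assumptions; the paper's contribution is extending such guarantees to semi-smooth, semi-Lipschitz losses.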

    Symmetric Sparse Boolean Matrix Factorization and Applications

    In this work, we study a variant of nonnegative matrix factorization where we wish to find a symmetric factorization of a given input matrix into a sparse, Boolean matrix. Formally speaking, given $\mathbf{M}\in\mathbb{Z}^{m\times m}$, we want to find $\mathbf{W}\in\{0,1\}^{m\times r}$ such that $\|\mathbf{M}-\mathbf{W}\mathbf{W}^\top\|_0$ is minimized among all $\mathbf{W}$ for which each row is $k$-sparse. This question turns out to be closely related to a number of questions like recovering a hypergraph from its line graph, as well as reconstruction attacks for private neural network training. As this problem is hard in the worst case, we study a natural average-case variant that arises in the context of these reconstruction attacks: $\mathbf{M}=\mathbf{W}\mathbf{W}^\top$ for $\mathbf{W}$ a random Boolean matrix with $k$-sparse rows, and the goal is to recover $\mathbf{W}$ up to column permutation. Equivalently, this can be thought of as recovering a uniformly random $k$-uniform hypergraph from its line graph. Our main result is a polynomial-time algorithm for this problem based on bootstrapping higher-order information about $\mathbf{W}$ and then decomposing an appropriate tensor. The key ingredient in our analysis, which may be of independent interest, is to show that such a matrix $\mathbf{W}$ has full column rank with high probability as soon as $m=\widetilde{\Omega}(r)$, which we do using tools from Littlewood-Offord theory and estimates for binary Krawtchouk polynomials. Comment: 33 pages, to appear in Innovations in Theoretical Computer Science (ITCS 2022), v2: updated ref
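The planted average-case model is easy to instantiate concretely: sample $\mathbf{W}$ with $k$-sparse Boolean rows and observe $\mathbf{M}=\mathbf{W}\mathbf{W}^\top$. The sketch below sets up this instance only (the recovery algorithm itself, via higher-order moments and tensor decomposition, is the paper's contribution and is not reproduced here); the helper name and parameter values are illustrative:

```python
import numpy as np

def random_sparse_boolean(m, r, k, rng):
    """Sample W in {0,1}^{m x r} with exactly k ones per row."""
    W = np.zeros((m, r), dtype=int)
    for i in range(m):
        W[i, rng.choice(r, size=k, replace=False)] = 1
    return W

rng = np.random.default_rng(0)
m, r, k = 8, 5, 2
W = random_sparse_boolean(m, r, k, rng)
M = W @ W.T  # observed input; every diagonal entry equals k

# Hypergraph view: each row of W is a k-uniform hyperedge over r
# vertices, and the off-diagonal entry M[i, j] counts the vertices
# shared by hyperedges i and j -- exactly the line-graph information
# from which the paper recovers W up to column permutation.
```

Note that recovery can only ever be up to column permutation, since permuting the columns of $\mathbf{W}$ leaves $\mathbf{W}\mathbf{W}^\top$ unchanged.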

    A Faster Quantum Algorithm for Semidefinite Programming via Robust IPM Framework

    This paper studies a fundamental problem in convex optimization: solving semidefinite programming (SDP) to high accuracy. It builds on the existing robust SDP-based interior point method analysis of [Huang, Jiang, Song, Tao and Zhang, FOCS 2022]. However, that previous work only provides an efficient implementation in the classical setting, whereas this work provides a novel quantum implementation. We give a quantum second-order algorithm with high accuracy in both the optimality and the feasibility of its output, whose running time depends on $\log(1/\epsilon)$ on well-conditioned instances. Due to the limitations of quantum computation itself or of first-order methods, all existing quantum SDP solvers either have polynomial error dependence or low accuracy in feasibility.